Moderate: Red Hat Ceph Storage 3.3 security, bug fix, and enhancement update

Synopsis

Moderate: Red Hat Ceph Storage 3.3 security, bug fix, and enhancement update

Type/Severity

Security Advisory: Moderate

Topic

An update is now available for Red Hat Ceph Storage 3.3 on Red Hat Enterprise Linux 7.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

Security Fix(es):

  • ceph: ListBucket max-keys has no defined limit in the RGW codebase (CVE-2018-16846)
  • ceph: debug logging for v4 auth does not sanitize encryption keys (CVE-2018-16889)
  • ceph: authenticated user with read only permissions can steal dm-crypt / LUKS key (CVE-2018-14662)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
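
As background on CVE-2018-16846: RGW accepted arbitrarily large max-keys values on bucket listing requests, so a single authenticated request could ask the gateway to assemble an enormous listing and exhaust its resources. The sketch below is a minimal illustration, not code from the fix: it uses the boto3 S3 client (the endpoint, credentials, and bucket name are hypothetical) to page through a bucket with a bounded MaxKeys per request. This is the access pattern a capped server-side limit enforces; oversized requests simply come back truncated, so clients that already follow IsTruncated and ContinuationToken keep working unchanged.

    #!/usr/bin/env python3
    """Illustrative only: page through a bucket with a bounded MaxKeys
    per request. Endpoint, credentials, and bucket are hypothetical."""
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:8080",  # hypothetical RGW endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    kwargs = {"Bucket": "example-bucket", "MaxKeys": 1000}
    while True:
        resp = s3.list_objects_v2(**kwargs)
        for obj in resp.get("Contents", []):
            print(obj["Key"])
        if not resp.get("IsTruncated"):
            break
        # Resume the listing from where the previous page stopped.
        kwargs["ContinuationToken"] = resp["NextContinuationToken"]

Similarly, CVE-2018-16889 concerned RGW's v4-auth debug logging writing unsanitized key material to the log. The snippet below illustrates the general class of fix involved, not the RGW implementation: a logging filter that masks secret tokens before a record reaches any log sink. The regular expression and logger name are assumptions made for the example.

    #!/usr/bin/env python3
    """Illustrative only: mask secrets before they reach a log sink."""
    import logging
    import re

    # Hypothetical pattern for "secret_key=..." style tokens; a real
    # deployment would match whatever credential material its logs carry.
    SECRET_RE = re.compile(r"(secret[_ ]?key\s*[:=]\s*)\S+", re.IGNORECASE)

    class RedactSecrets(logging.Filter):
        def filter(self, record: logging.LogRecord) -> bool:
            record.msg = SECRET_RE.sub(r"\1<redacted>", str(record.msg))
            return True  # keep the record, just with the secret masked

    log = logging.getLogger("rgw-demo")
    log.addFilter(RedactSecrets())
    logging.basicConfig(level=logging.DEBUG)

    log.debug("v4 auth: secret_key=EXAMPLEKEY0000000000 signature ok")
    # Logs: v4 auth: secret_key=<redacted> signature ok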

Bug Fix(es) and Enhancement(s):

For detailed information on changes in this release, see the Red Hat Ceph Storage 3.3 Release Notes available at:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3.3/html/release_notes/index

Solution

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
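
The article above describes the standard yum-based procedure. As a small, hypothetical convenience (not part of this advisory), the Python sketch below queries the installed Ceph package versions with rpm so an administrator can confirm on each node that the errata packages were picked up after the update; the package list is an assumption and should be adjusted to what a given node actually runs.

    #!/usr/bin/env python3
    """Hypothetical post-update check: print installed Ceph package
    versions. The package names below are illustrative assumptions."""
    import subprocess

    PACKAGES = ["ceph-common", "ceph-mon", "ceph-osd", "librados2"]

    for pkg in PACKAGES:
        # `rpm -q` exits non-zero when the package is not installed.
        result = subprocess.run(["rpm", "-q", pkg],
                                capture_output=True, text=True)
        print(result.stdout.strip() if result.returncode == 0
              else f"{pkg}: not installed")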

Affected Products

  • Red Hat Ceph Storage 3 x86_64
  • Red Hat Ceph Storage MON 3 x86_64
  • Red Hat Ceph Storage OSD 3 x86_64
  • Red Hat Ceph Storage for Power 3 ppc64le
  • Red Hat Ceph Storage MON for Power 3 ppc64le
  • Red Hat Ceph Storage OSD for Power 3 ppc64le

Fixes

  • BZ - 1337915 - purge-cluster.yml confused by presence of ceph installer, ceph kernel threads
  • BZ - 1572933 - infrastructure-playbooks/shrink-osd.yml leaves behind NVMe partition; scenario non-collocated
  • BZ - 1599852 - radosgw-admin bucket rm --bucket=${bucket} --bypass-gc --purge-objects not cleaning up objects in secondary site
  • BZ - 1627567 - MDS fails heartbeat map due to export size
  • BZ - 1628309 - MDS should handle large exports in parts
  • BZ - 1628311 - MDS balancer may stop prematurely
  • BZ - 1631010 - batch: allow journal+block.db sizing on the CLI
  • BZ - 1636136 - [CEE/SD] add ceph_docker_registry to group_vars/all.yml.sample the same way ceph-ansible does, allowing a custom registry for systems without direct internet access
  • BZ - 1637327 - CVE-2018-14662 ceph: authenticated user with read only permissions can steal dm-crypt / LUKS key
  • BZ - 1639712 - dynamic bucket resharding unexpected behavior
  • BZ - 1644321 - lvm scenario - stderr: Device /dev/sdb excluded by a filter
  • BZ - 1644461 - CVE-2018-16846 ceph: ListBucket max-keys has no defined limit in the RGW codebase
  • BZ - 1644610 - [RFE] allow --no-systemd flag for 'simple' sub-command
  • BZ - 1644847 - [RFE] ceph-volume zap enhancements based on the OSD ID instead of a device
  • BZ - 1651054 - [iSCSI-container] - After cluster purge and recreation, iSCSI target creation failed.
  • BZ - 1656908 - [ceph-ansible] Ceph NFS installation fails at task 'start nfs gateway service' in Ubuntu IPv6 deployment
  • BZ - 1659611 - ceph ansible rolling upgrade does not restart tcmu-runner and rbd-target-api
  • BZ - 1661504 - [RFE] append x-amz-version-id in PUT response
  • BZ - 1665334 - CVE-2018-16889 ceph: debug logging for v4 auth does not sanitize encryption keys
  • BZ - 1666822 - ceph-volume does not always populate dictionary key rotational
  • BZ - 1668478 - Failed to Purge Cluster
  • BZ - 1668896 - Ability to search by access-key using the radosgw-admin tool [Consulting]
  • BZ - 1668897 - Ability to register/associate one email to multiple user accounts [Consulting]
  • BZ - 1669838 - [RFE] Including some rgw bits in mgr-restful plugin
  • BZ - 1670527 - if LVM is not installed, containers don't come up after a system reboot
  • BZ - 1670785 - rbd-target-api.service doesn't get started after starting rbd-target-gw.service.
  • BZ - 1677269 - Need to add port 9283/tcp to /usr/share/cephmetrics-ansible/roles/ceph-node-exporter/tasks/configure_firewall.yml
  • BZ - 1680144 - [RFE] RGW metadata search support for elastic search 6.0 API changes
  • BZ - 1680155 - ceph-ansible is configuring VIP address for MON and RGW
  • BZ - 1685253 - ceph-ansible non-collocated OSD scenario should not create block.wal by default
  • BZ - 1685734 - MDS `cache drop` command does not timeout as expected
  • BZ - 1686306 - [ceph-ansible] shrink-osd.yml fails at stopping osd service task
  • BZ - 1695850 - ceph-ansible containerized Ceph MDS is limited to 1 CPU core by default - not enough
  • BZ - 1696227 - [RFE] print client IP in default debug_ms log level when "bad crc in {front|middle|data}" occurs
  • BZ - 1696691 - [CEE/SD] 'ceph osd in any' marks all osds 'in' even if the osds are removed completely from the Ceph cluster.
  • BZ - 1696880 - ceph ansible 3.x still sets memory option if
  • BZ - 1700896 - Update nfs-ganesha to 2.7.4
  • BZ - 1701029 - [RFE] GA support for ASIO/Beast HTTP Frontend
  • BZ - 1702091 - nofail option is unsupported in the kernel driver
  • BZ - 1702092 - MDS may report spurious warning during subtree migration
  • BZ - 1702093 - MDS may hit an assertion during shutdown
  • BZ - 1702097 - MDS does not initialize based on config mds_cap_revoke_eviction_timeout
  • BZ - 1702099 - MDS may return ENOSPC for a series of renames to a target directory
  • BZ - 1702100 - MDS may crash during reconnect when processing reconnect message
  • BZ - 1702285 - It takes significantly longer to deploy bluestore than filestore on the same hardware
  • BZ - 1702732 - [ceph-ansible] - group_vars files say that default values are based on the RHCS 2.x hardware guide
  • BZ - 1703557 - rgw: object expirer: handle resharded buckets
  • BZ - 1704948 - [Rebase] rebase ceph to 12.2.12
  • BZ - 1705258 - RGW: expiration_date returned from lifecycle is in wrong format. [Consulting]
  • BZ - 1705922 - Getting versioning state of non-existing bucket returns HTTP Response 200
  • BZ - 1708346 - Memory growth when enabling rgw_enable_ops_log = True with no consumption of queue
  • BZ - 1708650 - PUT Bucket Lifecycle doesn't clear existing lifecycle policy
  • BZ - 1708798 - rgw: luminous: keystone: backport keystone S3 credential caching
  • BZ - 1709765 - [RGW]: Radosgw unable to start post upgrade to latest Luminous build
  • BZ - 1710855 - nfs-ganesha crashed due to invalid rgw_fh pointer passed by FSAL_RGW?
  • BZ - 1713779 - rgw-multisite: 'radosgw-admin bilog trim' stops after 1000 entries
  • BZ - 1714810 - MDS may hang during up:rejoin while iterating inodes
  • BZ - 1714814 - MDS may try trimming all of its journal at once after recovery
  • BZ - 1715577 - [Consulting] Ceph Balancer not working with EC/upmap configuration
  • BZ - 1715946 - [RGW-NFS]: objects stored on nfs mount may have inconsistent tail tag and fail to gc
  • BZ - 1717135 - S3 client timed out in RGW - listing the large buckets having ~14 million objects with 256 bucket index shards
  • BZ - 1718135 - Multiple MDS crashing with assert(mds->sessionmap.get_version() == cmapv) in ESessions::replay while replaying journal
  • BZ - 1718328 - S3 client timed out in RGW while listing buckets having 2 million to 5 million objects.
  • BZ - 1719023 - ceph-validate: devices are not validated in non-collocated and lvm_batch scenarios
  • BZ - 1720205 - [GSS] MONs continuously calling for election on lease expiry
  • BZ - 1720741 - [RGW] bucket_list on large bucket causing application to not startup, and performance impact on all other clients using RGW
  • BZ - 1721165 - MDS session reference count may leak due to regression in 12.2.11
  • BZ - 1722663 - ceph-ansible: purge-cluster.yml fails when initiated second time
  • BZ - 1722664 - radosgw-admin bucket rm fails to remove a bucket with error "aborted 152 incomplete multipart uploads"
  • BZ - 1725521 - Config parser error when importing a rados config larger than 1024 bytes
  • BZ - 1725536 - few OSDs are not coming up and log error "In function 'void KernelDevice::_aio_thread()' thread 7f3e4ead9700 ... bluestore/KernelDevice.cc: 397: FAILED assert(0 == "unexpected aio error")"
  • BZ - 1732142 - [RFE] Changing BlueStore OSD rocksdb_cache_size default value to 512MB for helping in compaction
  • BZ - 1732706 - [RGW-NFS]: nfs-ganesha aborts due to "Cannot acquire credentials for principal nfs"
  • BZ - 1734550 - GetBucketLocation on non-existing bucket doesn't throw NoSuchBucket and gives 200
  • BZ - 1739209 - [ceph-ansible] - rolling-update of containerized cluster from 2.x to 3.x failed trying to run systemd-device-to-id.sh saying no such file

CVEs

  • CVE-2018-14662 (https://access.redhat.com/security/cve/CVE-2018-14662)
  • CVE-2018-16846 (https://access.redhat.com/security/cve/CVE-2018-16846)
  • CVE-2018-16889 (https://access.redhat.com/security/cve/CVE-2018-16889)

References

  • https://access.redhat.com/security/updates/classification/#moderate